Back to Projects

Micro Foods Market - Distributed Back-End

Project: Microservices Architecture (CSE 380, Spring 2025)

Flask Python Docker Microservices SQLite JWT

This extensive project required building the complete back-end infrastructure for a grocery store using a Microservices Architecture. The primary challenge was containerizing five separate Flask applications, managing inter-service communication over a shared Docker network, and ensuring secure authentication using JWTs and role-based authorization (employee vs. customer).

Each service manages its own data persistence using local SQLite databases, enforcing the principle of decentralized data ownership characteristic of microservices.
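As a sketch of that per-service data ownership, each container can create and manage its own SQLite file at startup; the table name, schema, and file name below are illustrative, not the project's actual schema:

```python
import sqlite3

DB_PATH = "products.db"  # each service keeps its own local file; name is illustrative

def init_db() -> None:
    # Runs once at container startup; no other service ever opens this file
    # directly -- all cross-service access goes through the HTTP API.
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute("""
            CREATE TABLE IF NOT EXISTS products (
                id    INTEGER PRIMARY KEY,
                name  TEXT NOT NULL,
                price REAL NOT NULL CHECK (price >= 0)
            )
        """)
```

Because the database lives inside the container's filesystem, tearing a container down and rebuilding it yields a clean state, which suits a test environment.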

System Architecture

The system was deployed as five separate Docker containers, communicating over a shared bridge network. This separation enforced decoupling and established clear API contracts between services.

1. User Management

Handles user creation, login, password hashing, and JWT generation.

2. Product Management

Manages product state. Only employees may modify products.

3. Product Searching

Queries products and retrieves metadata via the Logging Service.

4. Ordering

Processes customer orders and calculates final totals.

5. Logging

Centralized sink for all successful system events.

Key Implementation Challenges

Inter-Service Authorization Flow

The primary technical hurdle was implementing secure communication chains. When a client attempts to edit a product on the Product Management Service, the following internal requests are triggered:

  1. Client to Product Service: HTTP request with JWT in the header.
  2. Product Service to User Service (Internal): Requests validation of the JWT and confirmation of employee status.
  3. User Service: Decodes JWT, checks database for employee status, and returns boolean authorization result.
  4. Product Service: If authorized, proceeds with the database update and sends a log POST request to the Logging Service.
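The chain above can be sketched from the Product Service's side. This is a hedged sketch, not the project's actual code: the `requests` library, the JSON shapes, the Logging Service URL, and the route names are assumptions, while `http://user:9000/auth_check` is the internal address pattern described in the deployment section.

```python
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Container names resolve over the shared Docker bridge network.
USER_SERVICE = "http://user:9000"    # from the compose network
LOG_SERVICE = "http://logging:9004"  # illustrative port

def is_employee(token: str) -> bool:
    # Steps 2-3: forward the JWT to the User Service, which decodes it,
    # checks its database, and answers with a boolean authorization result.
    resp = requests.post(f"{USER_SERVICE}/auth_check", json={"token": token}, timeout=5)
    return resp.status_code == 200 and resp.json().get("authorized", False)

@app.route("/products/<int:product_id>", methods=["PUT"])
def update_product(product_id):
    # Step 1: the client sends the JWT in the Authorization header.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if not is_employee(token):
        return jsonify({"error": "employee role required"}), 403
    # ... apply the update to this service's local SQLite database ...
    # Step 4: report the successful event to the centralized Logging Service.
    requests.post(f"{LOG_SERVICE}/logs",
                  json={"service": "product", "event": f"updated product {product_id}"},
                  timeout=5)
    return jsonify({"status": "updated"}), 200
```

Keeping the token check inside the User Service means the JWT secret never has to be shared with the other four containers.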

Dockerization and Deployment Pipeline

The solution required five separate `Dockerfile` configurations (`Dockerfile.users`, etc.) built from the same Python base image. The final deployment relied on a single `compose.yaml` file to orchestrate all five containers onto a shared network, ensuring they could resolve each other by their container names (e.g., `http://user:9000/auth_check`). This setup was essential for achieving a reliable, repeatable test environment.
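A `compose.yaml` following that layout might look like the sketch below; `Dockerfile.users` is named above, but the other filenames, ports, and network name are illustrative:

```yaml
services:
  user:
    build:
      context: .
      dockerfile: Dockerfile.users
    networks: [backend]
  product:
    build:
      context: .
      dockerfile: Dockerfile.products   # illustrative filename
    networks: [backend]
  # ... the search, order, and logging services follow the same pattern ...

networks:
  backend:
    driver: bridge
```

With every service attached to the same bridge network, Docker's embedded DNS lets each container reach the others by service name, so a single `docker compose up` reproduces the full five-service environment.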